Opportunity and Risk




Understanding Opportunities and Risks of Synthetic Relationships: Leveraging the Power of Longitudinal Research with Customised AI Tools

Ventura, Alfio, Köbis, Nils

arXiv.org Artificial Intelligence

This position paper discusses the benefits of longitudinal behavioural research with customised AI tools for exploring the opportunities and risks of synthetic relationships. Synthetic relationships are defined as "continuing associations between humans and AI tools that interact with one another wherein the AI tool(s) influence(s) humans' thoughts, feelings, and/or actions" (Starke et al., 2024). These relationships can potentially improve health, education, and the workplace, but they also carry risks of subtle manipulation and concerns about privacy and autonomy. To harness the opportunities of synthetic relationships and mitigate their risks, we outline a methodological approach that complements existing findings. We propose longitudinal research designs with self-assembled AI agents that enable the integration of detailed behavioural and self-reported data.


Safe Exploitative Play with Untrusted Type Beliefs

Li, Tongxin, Handina, Tinashe, Ren, Shaolei, Wierman, Adam

arXiv.org Artificial Intelligence

The combination of Bayesian games and learning has a rich history: a single agent in a multi-agent system is controlled given a set of types, each specifying a possible behavior for the other agents, and plans its own actions against the types it believes most likely in order to maximize its payoff. However, type beliefs are often learned from past actions and are likely to be incorrect. With this perspective in mind, we consider an agent in a game with type predictions of the other agents and investigate the impact of incorrect beliefs on the agent's payoff. In particular, we formally define a tradeoff between risk and opportunity by comparing the payoff obtained against the optimal payoff, represented by a gap caused by trusting or distrusting the learned beliefs. Our main results characterize this tradeoff by establishing upper and lower bounds on the Pareto front for both normal-form and stochastic Bayesian games, with numerical results provided.
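The trust-versus-distrust gap the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's formal model: the payoff numbers, the type names, and the assumption that each opponent type plays one fixed action are all invented here.

```python
# Toy illustration (not the paper's model): a row agent faces an opponent
# whose unknown type determines its column action. Trusting a learned type
# belief maximizes payoff when the belief is right (opportunity) but can
# lose badly when it is wrong (risk); a maximin action bounds the loss.

TYPE_ACTION = {"cooperative": 0, "adversarial": 1}  # each type plays a fixed column
PAYOFF = [[3, -2],   # row agent's payoff[row_action][col_action]
          [1,  1]]

def trusting_action(believed_type):
    """Best response if the belief about the opponent's type is trusted."""
    col = TYPE_ACTION[believed_type]
    return max(range(2), key=lambda a: PAYOFF[a][col])

def maximin_action():
    """Distrust the belief entirely: maximize the worst-case payoff."""
    return max(range(2), key=lambda a: min(PAYOFF[a]))

def gap(action, true_type):
    """Regret versus the optimal payoff against the true type."""
    col = TYPE_ACTION[true_type]
    return max(row[col] for row in PAYOFF) - PAYOFF[action][col]
```

In this toy, trusting a "cooperative" belief earns the full payoff of 3 when the belief is correct but incurs a gap of 3 when the opponent is actually adversarial, while the maximin action caps the gap at 2 either way; that is the shape of the risk-opportunity tradeoff whose Pareto front the paper bounds.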


The Use of Synthetic Data to Train AI Models: Opportunities and Risks for Sustainable Development

Marwala, Tshilidzi, Fournier-Tombs, Eleonore, Stinckwich, Serge

arXiv.org Artificial Intelligence

In the current data-driven era, synthetic data, artificially generated data that resembles the characteristics of real-world data without containing actual personal information, is gaining prominence. This is due to its potential to safeguard privacy, increase the availability of data for research, and reduce bias in machine learning models. This paper investigates the policies governing the creation, utilization, and dissemination of synthetic data. Synthetic data can be a powerful instrument for protecting the privacy of individuals, but it also presents challenges, such as ensuring its quality and authenticity. A well-crafted synthetic data policy must strike a balance between privacy concerns and the utility of data, ensuring that it can be utilized effectively without compromising ethical or legal standards. Organizations and institutions must develop standardized guidelines and best practices in order to capitalize on the benefits of synthetic data while addressing its inherent challenges.


Anthropomorphization of AI: Opportunities and Risks

Deshpande, Ameet, Rajpurohit, Tanmay, Narasimhan, Karthik, Kalyan, Ashwin

arXiv.org Artificial Intelligence

Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts -- children anthropomorphize toys, adults do so with brands, and it is a literary device. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With the widespread adoption of AI systems, and the push from stakeholders to make them human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize them increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, as analyzed through the lens of the recent Blueprint for an AI Bill of Rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups like children and patients, among others, our work is a timely and important contribution. We propose a conservative strategy for the cautious use of anthropomorphization to improve the trustworthiness of AI systems.


Britain's competition watchdog opens investigation into artificial intelligence market

FOX News

Britain's competition watchdog said Thursday that it's opening a review of the artificial intelligence market, focusing on the technology underpinning chatbots like ChatGPT. The Competition and Markets Authority said it will look into the opportunities and risks of AI as well as the competition rules and consumer protections that may be needed. The CEOs of Google, Microsoft and ChatGPT-maker OpenAI will meet Thursday with U.S. Vice President Kamala Harris for talks on how to ease the risks of their technology.


Artificial intelligence chatbots: Friend or foe?

#artificialintelligence

Breaking news at the time of writing is that American artificial intelligence (AI) company OpenAI has released Generative Pre-trained Transformer 4, more commonly known as GPT-4 (14 March 2023). The launch of this latest multimodal large language model further increases the AI opportunities and risks facing the insurance industry. This latest version of OpenAI's chatbot can respond to images, and it processes around eight times as many words as the original ChatGPT model launched in November 2022. Trained on text taken from the internet, ChatGPT has been designed to provide quick and understandable answers to any question. Ian McKenna, chief executive of the Financial Technology Research Centre, said: "If you look at what some of these chatbots can do now and extrapolate what they will be able to do in four or five years' time, it's really quite scary. People won't have to remember facts and data in the same way, and it will have an enormous impact on insurance on so many fronts."


Towards Security AI: Opportunity and Risks (New Article Video)

#artificialintelligence

Are you interested in, worried about, or already planning to implement intelligent automation for security purposes? I have clients who already do that, and, may I say, they do it well. A good example is when a well-tested, tuned, and statistically validated machine learning model is used to reduce insurance fraud. Whilst still difficult, the value of automating security-related actions is high. I explain how to get started, how to prepare your data, and how the two fundamental ML approaches, supervised vs. unsupervised, can work for a security application.
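The supervised-versus-unsupervised distinction mentioned above can be made concrete with a tiny sketch. Everything here is invented for illustration (the claim amounts, the labels, the cutoffs): the unsupervised path flags statistical outliers with no labels at all, while the supervised path tunes a decision threshold from historically labelled fraud cases.

```python
# Hedged illustration, not from the article: two minimal ways to flag
# suspicious insurance claims. Data and thresholds are made up.
from statistics import mean, stdev

claims = [120, 110, 130, 125, 118, 900]  # claim amounts; 900 is the oddball

# Unsupervised: z-score outlier detection, no labels needed.
def zscore_flags(xs, cutoff=2.0):
    m, s = mean(xs), stdev(xs)
    return [abs(x - m) / s > cutoff for x in xs]

# Supervised: a threshold tuned on historically labelled claims
# (amount, is_fraud). The "model" is deliberately the simplest possible:
# the midpoint between the mean fraudulent and mean legitimate amounts.
labelled = [(100, False), (140, False), (850, True), (990, True)]

def tuned_threshold(examples):
    fraud = [x for x, y in examples if y]
    legit = [x for x, y in examples if not y]
    return (mean(fraud) + mean(legit)) / 2

threshold = tuned_threshold(labelled)
supervised_flags = [x > threshold for x in claims]
```

Both paths flag the same outlier here, but they fail differently in practice: the unsupervised detector drifts with the data distribution, while the supervised threshold is only as good as the labelled history it was tuned on, which is why validation and tuning matter so much for security use.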


Aerial threat: rewards come with the AI revolution, but risks follow (The Mandarin)

#artificialintelligence

The changing parameters of opportunity and risk from the emerging AI revolution run much deeper than might be generally supposed, say Professor Anthony Elliott and Julie Hare. From personal virtual assistants and chatbots to self-driving vehicles and tele-robotics, AI is now threaded into large tracts of everyday life. It is reshaping society and the economy. Klaus Schwab, founder of the World Economic Forum, has said that today's AI revolution is "unlike anything humankind has experienced before". AI is not so much an advancement of technology, but rather the metamorphosis of all technology.